    Legitimacy of executive compensation plans: A preliminary study of French laypersons' acceptability

    RÎle de la Face et de l'Utilité dans l'interprétation d'Énoncés Ambigus Question/RequĂȘte, Incompréhension/Désaccord (Role of Face and Utility in the Interpretation of Ambiguous Utterances: Question/Request, Misunderstanding/Disagreement)

    Many utterances are ambiguous in the sense that they can be interpreted differently depending on whether one considers their direct meaning or their indirect meaning(s). In this article, we examine two of these ambiguities: the direct question/indirect request ambiguity (e.g., “Is there any coffee left?”), and the indirect disagreement/indirect request for explanation ambiguity (e.g., “I don't follow you”). We combine and test the predictions of two approaches to the interpretation of such ambiguities: the Face Management approach, which focuses on interpersonal variables such as status, affective distance, or potential face threat; and the Utilitarian Relevance approach, which focuses on the speaker's goal at the time of utterance. Results fully support the previously untested predictions of the Utilitarian Relevance approach and offer new perspectives on the Face Management approach.

    Active Involvement, not Illusory Control, increases Risk Taking in a Gambling Game

    The research considers the influence of Choice (the possibility for the player to choose one gamble or another) and Involvement (the physical interaction with the gambling device) on risk taking in gambling games, and whether this influence is mediated by illusory control over the outcome of the gamble. Results of a laboratory experiment (n = 100) show that (a) although Choice does increase illusory control, this influence does not translate into increased risk taking, and (b) whilst Involvement does increase risk taking, this effect is not mediated by illusory control. These results are discussed in relation to problem gambling, beliefs in the deployability of personal luck, and arousal approaches to risk taking.

    Humans Feel Too Special for Machines to Score Their Morals

    Artificial Intelligence (AI) can be harnessed to create sophisticated social and moral scoring systems, enabling people and organizations to form judgements of others at scale. However, it also poses significant ethical challenges and is, consequently, the subject of wide debate. As these technologies are developed and governing bodies face regulatory decisions, it is crucial that we understand the attraction or resistance that people feel toward AI moral scoring. Across four experiments, we show that the acceptability of moral scoring by AI is related to expectations about the quality of those scores, but that expectations about quality are compromised by people's tendency to see themselves as morally peculiar. We demonstrate that people overestimate the peculiarity of their moral profile, believe that AI will neglect this peculiarity, and for this reason resist the introduction of moral scoring by AI.

    Two varieties of conditionals and two kinds of defeaters help reveal two fundamental types of reasoning

    Two notions from philosophical logic and linguistics are brought together and applied to the psychological study of defeasible conditional reasoning. The distinction between disabling conditions and alternative causes is shown to be a special case of Pollock's (1987) distinction between 'rebutting' and 'undercutting' defeaters. 'Inferential' conditionals are shown to come in two types, one that is sensitive to rebutters, the other to undercutters. It is thus predicted and demonstrated in two experiments that the type of inferential conditional used as the major premise of conditional arguments can reverse the heretofore classic, distinctive effects of defeaters.

    Anxiety-induced miscalculations, more than differential inhibition of intuition, explain the gender gap in cognitive reflection

    The Cognitive Reflection Test (CRT) is among the most common and well-known instruments for measuring the propensity to engage reflective processing, in the context of the dual-process theory of high-level cognition. There is robust evidence that men perform better than women on this test, but we should be wary of concluding that men are more likely to engage reflective processing than women. We consider several possible loci for the gender difference in CRT performance, and use mathematical modeling to show, across two studies, that the gender difference in CRT performance is more likely due to women making more mathematical mistakes (partially explained by their greater mathematics anxiety) than due to women being less likely to engage reflective processing. We argue as a result that we need to use gender-fair variants of the CRT, both to improve the quality of our instruments and to fulfill our social responsibility as scientists.
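
    To illustrate the kind of mathematical modeling involved, here is a minimal sketch in Python, not the authors' actual model: it assumes a two-parameter account in which r is the probability of engaging reflection and e is the probability of a mathematical slip once reflection is engaged, and recovers both parameters from hypothetical counts of responses to a CRT item by a simple maximum-likelihood grid search. The response categories, the example counts, and the fitting procedure are illustrative assumptions.

    # A minimal sketch, not the authors' model: separates the probability of
    # engaging reflection (r) from the probability of a mathematical slip (e).
    import math
    from itertools import product

    def response_probs(r, e):
        """Predicted probabilities of three CRT response categories (assumed model)."""
        return {
            "intuitive": 1 - r,        # reflection never engaged -> lured answer
            "correct": r * (1 - e),    # reflection engaged, computation succeeds
            "other": r * e,            # reflection engaged, computation slips
        }

    def fit_by_grid(counts, step=0.01):
        """Maximum-likelihood estimates of (r, e) from observed response counts."""
        best, best_ll = (0.0, 0.0), -math.inf
        grid = [i * step for i in range(1, int(1 / step))]
        for r, e in product(grid, grid):
            p = response_probs(r, e)
            ll = sum(counts[k] * math.log(p[k]) for k in counts)
            if ll > best_ll:
                best, best_ll = (r, e), ll
        return best

    if __name__ == "__main__":
        # Hypothetical response counts for one CRT item (invented numbers).
        observed = {"intuitive": 55, "correct": 30, "other": 15}
        r_hat, e_hat = fit_by_grid(observed)
        print(f"estimated P(reflect) = {r_hat:.2f}, P(math slip | reflect) = {e_hat:.2f}")

    Fitting such a model separately for men and women makes it possible to ask whether a performance gap reflects a lower probability of engaging reflection or a higher probability of computational slips.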

    Experimental Assessment of Aggregation Principles in Argumentation-Enabled Collective Intelligence

    On the Web, there is always a need to aggregate opinions from the crowd (as in posts, social networks, forums, etc.). Different mechanisms have been implemented to capture these opinions, such as Like on Facebook, Favorite on Twitter, thumbs-up/-down, flagging, and so on. However, in more contested domains (e.g., Wikipedia, political discussion, and climate change discussion), these mechanisms are not sufficient, since they only deal with each issue independently without considering the relationships between different claims. We can view a set of conflicting arguments as a graph in which the nodes represent arguments and the arcs between these nodes represent the defeat relation. A group of people can then collectively evaluate such graphs. To do this, the group must use a rule to aggregate their individual opinions about the entire argument graph. Here we present the first experimental evaluation of different principles commonly employed by aggregation rules presented in the literature. We use randomized controlled experiments to investigate which principles people consider better at aggregating opinions under different conditions. Our analysis reveals a number of factors, not captured by traditional formal models, that play an important role in determining the efficacy of aggregation. These results help bring formal models of argumentation closer to real-world application.
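
    To make the graph-based setting concrete, here is a minimal sketch in Python (the toy graph, the participant opinions, and the argument-wise majority rule are illustrative assumptions, not the materials or the specific rules evaluated in the paper): it represents an argument graph as a map from each argument to the arguments it defeats, aggregates individual accept/reject opinions by a majority vote on each argument, and checks whether the collective labelling ends up accepting two arguments linked by a defeat.

    # A minimal sketch of collective evaluation of an argument graph
    # (illustrative only; not the aggregation rules studied in the paper).
    from collections import Counter

    def majority_aggregate(opinions):
        """Accept an argument iff a strict majority of participants accept it."""
        result = {}
        for arg in opinions[0]:
            votes = Counter(op[arg] for op in opinions)
            result[arg] = votes[True] > votes[False]
        return result

    def conflicts(graph, labelling):
        """List defeats holding between two collectively accepted arguments."""
        accepted = {a for a, ok in labelling.items() if ok}
        return [(a, b) for a in accepted for b in graph.get(a, set()) if b in accepted]

    if __name__ == "__main__":
        # Toy graph: argument A defeats B, and B defeats C (invented example).
        graph = {"A": {"B"}, "B": {"C"}, "C": set()}
        opinions = [
            {"A": True, "B": False, "C": True},
            {"A": True, "B": True, "C": False},
            {"A": False, "B": False, "C": True},
        ]
        collective = majority_aggregate(opinions)
        print("collective labelling:", collective)
        print("conflict-freeness violations:", conflicts(graph, collective))

    Argument-wise majority is simple but, as the check above makes visible, it can in principle accept two arguments that defeat one another; avoiding such outcomes is the kind of principle that distinguishes the aggregation rules compared in the paper.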

    How safe is safe enough? Psychological mechanisms underlying extreme safety demands for self-driving cars

    Autonomous Vehicles' (AVs) promise of a multi-trillion-dollar industry that revolutionizes transportation safety and convenience depends as much on overcoming the psychological barriers to their widespread use as on the technological and legal challenges. The first AV-related traffic fatalities have pushed manufacturers and regulators towards decisions about how mature AV technology should be before the cars are rolled out in large numbers. We discuss the psychological factors underlying the question of how safe AVs need to be to compel consumers away from relying on the abilities of human drivers. For consumers, how safe is safe enough? Three preregistered studies (N = 4,566) reveal that the established psychological biases of algorithm aversion and the better-than-average effect leave consumers averse to adopting AVs unless the cars meet extremely, and potentially unrealistically, high safety standards. Moreover, these biases prove stubbornly hard to overcome, and they risk substantially delaying the adoption of life-saving autonomous driving technology. We end by proposing that, from a psychological perspective, the emphasis AV advocates have put on safety may be misplaced.

    Bad machines corrupt good morals

    Machines powered by Artificial Intelligence (AI) are now influencing the behavior of humans in ways that are both like and unlike the ways humans influence each other. In light of recent research showing that other humans can exert a strong corrupting influence on people's ethical behavior, concerns arise about the corrupting power of AI agents. To estimate the empirical validity of these fears, we review the available evidence from behavioral science, human-computer interaction, and AI research. We propose that the main social roles through which both humans and machines can influence ethical behavior are (a) role model, (b) advisor, (c) partner, and (d) delegate. When AI agents become influencers (role models or advisors), their corrupting power may not (yet) exceed the corrupting power of humans. However, AI agents acting as enablers of unethical behavior (partners or delegates) have many characteristics that may let people reap unethical benefits while feeling good about themselves, indicating good reasons for worry. Based on these insights, we outline a research agenda that aims at providing more behavioral insights for better AI oversight.